TECHnalysis Research Blog

May 22, 2018
The World of AI Is Still Taking Baby Steps

By Bob O'Donnell

Given all the press surrounding artificial intelligence (AI), it’s easy to be confused. After all, if you believe everything you read, you’d think we’re practically living in an AI-controlled world already, and it’s only a matter of time before the machines take over.

Except, well, a quick reality check will easily show that that perspective is far from the truth. To be sure, AI has had a profound impact on many different aspects of our lives—from smart personal assistants to semi-autonomous cars to chatbot-based customer service agents and much more—but the overall real-world influence of AI is still very modest.

Part of the confusion stems from a misunderstanding of AI. Thanks to a number of popular, influential science fiction movies, many people associate AI with a smart, broad-based intelligence that can enable something like the nasty, people-hating world of Skynet from the Terminator movies. In reality, however, most AI applications of today and the near future are very practical—and, therefore, much less exciting.

Leveraging AI-based computer vision on a drone to spot a crack in an oil pipeline, for example, is a great real-world AI application, but it’s hardly the stuff of AI-inspired nightmares. Similarly, there are many other examples of very practical applications that can leverage the pattern recognition-based capabilities of AI, but do so in a real-world way that not only isn’t scary, but, frankly, isn’t that far advanced beyond other types of analytics-based applications.

Even the impressive Google Duplex demos from the company’s recent I/O event may not be quite as awe-inspiring as they first appeared. Among many other issues, it turns out Duplex was specifically trained to make just haircut appointments and dinner reservations—not doctor’s appointments, coordinating a night out with friends, or any of the multitude of other real-world, voice assistant-driven phone calls that the Duplex demo implied were possible.

Most AI-based activities are still extraordinarily literal. So, if there’s an AI-based app that can recognize dogs in photos, for example, that’s all it can do. It can’t recognize other animal species, let alone distinguish their varieties, or serve as a general object detection and identification service. While it’s easy to presume that an application that can identify specific dog breeds offers similar intelligence across other objects, it’s simply not the case. We’re not dealing with a general intelligence when it comes to AI, but a very specific intelligence that’s highly dependent on the data that it’s been fed.
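
To make that narrowness concrete, here’s a minimal sketch in Python (the model, its random stand-in weights, and the breed labels are all hypothetical, purely for illustration): a classifier’s output layer can only ever name the labels it was trained on, so anything you feed it gets forced into one of those buckets.

    # Sketch of why a narrow classifier stays narrow: its output layer can only
    # ever produce the labels it was trained on. The "weights" below are random
    # stand-ins, not a real trained network.
    import numpy as np

    DOG_BREEDS = ["beagle", "poodle", "husky", "corgi"]  # the only labels this model knows

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(len(DOG_BREEDS), 2048))   # stand-in for trained weights

    def classify(image_features: np.ndarray) -> str:
        """Return the most likely label -- always a dog breed, no matter the input."""
        logits = weights @ image_features
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return DOG_BREEDS[int(np.argmax(probs))]

    # Features extracted from a photo of a cat (or a pipeline crack) still get
    # mapped to one of the four dog-breed buckets, often with high confidence.
    cat_photo_features = rng.normal(size=2048)
    print(classify(cat_photo_features))

A genuinely general-purpose system would need both far broader training data and a fundamentally broader set of output categories, and neither comes for free.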

I point this out not to denigrate the incredible capabilities that AI has already delivered across a wide variety of applications, but simply to clarify that we can’t think about artificial intelligence in the same way that we do about human-type intelligence. AI-based advances are amazing, but they needn’t be feared as a near-term harbinger of crazy, terrible, scary things to come. While I’m certainly not going to deny the potential to create some very nasty outcomes from AI-based applications in a decade or two, in the near and medium-term future, they’re not only not likely, they’re not even technically possible.

Instead, what we should concentrate on in the near-term is the opportunity to apply the very focused capabilities of AI onto important (but not necessarily groundbreaking) real-world challenges. This means things like improving the efficiency or reducing the fault rate on manufacturing lines or providing more intelligent answers to our smart speaker queries. There are also more important potential outcomes, such as more accurately recognizing cancer in X-rays and CAT scans, or helping to provide an unbiased decision about whether or not to extend a loan to a potential banking customer.
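
To give a sense of just how focused these applications really are, here’s a minimal sketch of the loan-decision example using scikit-learn on entirely made-up data (the features, the labeling rule, and the library choice are all my own assumptions, not anything described above); a real lending model would, of course, also require the careful fairness and bias auditing implied by the word “unbiased.”

    # Sketch: a narrowly focused model for a single practical decision --
    # approve or decline a loan -- trained on made-up applicant data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Hypothetical features: [annual income (k$), debt-to-income ratio, years of credit history]
    X = rng.normal(loc=[60, 0.3, 8], scale=[20, 0.1, 4], size=(500, 3))
    # Toy labeling rule standing in for historical repayment outcomes.
    y = ((X[:, 0] > 50) & (X[:, 1] < 0.35)).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    applicant = np.array([[72, 0.25, 10]])  # one hypothetical applicant
    print("approve" if model.predict(applicant)[0] == 1 else "decline")

The point isn’t the algorithm; it’s the scope: one well-defined question, answered from patterns in whatever data the model was given.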

Along the way, it’s also important to think about the tools that can help drive a faster, more efficient AI experience. For many organizations, that means a growing concentration on new types of compute architectures, such as GPUs, FPGAs, DSPs, and AI-specific chip implementations, all of which have been shown to offer advantages over traditional CPUs in certain types of AI training and inference-focused applications. At the same time, it’s critically important to look at tools that can offer easier, more intelligible access to these new environments, whether that means software environments like Nvidia’s CUDA platform for GPUs, National Instruments’ LabVIEW tool for programming FPGAs, or other similar tools.
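
As a rough illustration of what targeting these newer architectures looks like from the software side, here’s a short sketch using PyTorch (my choice for illustration; the column itself points to CUDA and LabVIEW rather than any particular framework). The same inference code runs on a CPU or, when one is available, a CUDA GPU, with the framework hiding the accelerator-specific details.

    # Sketch: the same inference pass can target a CPU or a CUDA GPU simply by
    # selecting a device; the framework handles the accelerator-specific work.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Stand-in for a trained model: a single linear layer acting as a classifier.
    model = torch.nn.Linear(in_features=2048, out_features=4).to(device)
    model.eval()

    batch = torch.randn(32, 2048, device=device)  # fake image features

    with torch.no_grad():
        scores = model(batch)  # runs on whichever device was selected

    print(f"ran inference on {device}; output shape: {tuple(scores.shape)}")

For large models, moving this kind of work from a CPU to an accelerator can be the difference between training taking weeks and taking hours, which is exactly why the tooling around these chips matters so much.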

Ultimately, we will see AI-based applications deliver an incredible amount of new capability, the most important of which, in the near-term, will be to make smart devices actually “real-world” smart. Way too many people are frustrated by the lack of “intelligence” in many of their digital devices, and I expect many of the first key advances in AI to be focused on these basic applications. Eventually, we’ll see a wide range of very advanced capabilities as well, but in the short term, it’s important to remember that the phrase artificial intelligence actually implies much less than it first appears to.

Here's a link to the column: https://techpinions.com/the-world-of-ai-is-still-taking-baby-steps/52942

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
